How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity

Alexander, Heather J., Simon, Jonathan A., Pinard, Frédéric

arXiv.org Artificial Intelligence

The law draws a sharp distinction between objects and persons, and between two kinds of persons: the "fictional" kind (i.e. corporations) and the "non-fictional" kind (individual or "natural" persons). This paper will assess whether we maximize overall long-term legal coherence by (A) maintaining an object classification for all future AI systems; (B) creating fictional legal persons associated with suitably advanced, individuated AI systems (giving these fictional legal persons derogable rights and duties associated with certified groups of existing persons, potentially including free speech, contract rights, and standing to sue "on behalf of" the AI system); or (C) recognizing non-fictional legal personhood through legal identity for suitably advanced, individuated AI systems (recognizing them as entities meriting legal standing with non-derogable rights, which in the human case include life, due process, habeas corpus, freedom from slavery, and freedom of conscience). We will clarify the meaning and implications of each option along the way, considering liability, copyright, family law, fundamental rights, civil rights, citizenship, and AI safety regulation. We tentatively find that the non-fictional personhood approach may be best from a coherence perspective, at least for some advanced AI systems. An object approach may prove untenable for sufficiently humanoid advanced systems, though we suggest that it is adequate for systems existing as of 2025. While fictional personhood would resolve some coherence issues for future systems, it would create others and provide solutions that are neither durable nor fit for purpose. Finally, our review will suggest that "hybrid" approaches are likely to fail and lead to further incoherence: the choice between object, fictional person, and non-fictional person is unavoidable.


An Artificial Intelligence Value at Risk Approach: Metrics and Models

Alvarez, Luis Enriquez

arXiv.org Artificial Intelligence

Artificial intelligence risks are multidimensional in nature: the same risk scenario may have legal, operational, and financial dimensions. Despite the emergence of new AI regulations, the state of the art of AI risk management remains highly immature. Although several methodologies and generic criteria have appeared, guidelines with real implementation value are rare, since the most important task is customizing AI risk metrics and risk models for specific risk scenarios. Furthermore, financial departments, legal departments, and governance, risk, and compliance teams often remain unaware of many technical aspects of AI systems, so data scientists and AI engineers emerge as the most appropriate implementers. It is crucial to decompose the problem of AI risk into several dimensions: data protection, fairness, accuracy, robustness, and information security. Consequently, the main task is developing adequate metrics and risk models that reduce uncertainty and support informed decisions about the risk management of AI systems. The purpose of this paper is to orient AI stakeholders to the depths of AI risk management. Although it is not highly technical, it requires basic knowledge of risk management, uncertainty quantification, the FAIR model, machine learning, large language models, and AI context engineering. The examples presented are deliberately basic and understandable, offering simple ideas that can be developed for specific customized AI environments. Many issues remain to be solved in AI risk management, and this paper presents a holistic overview of the interdependencies among AI risks and how to model them together within risk scenarios.
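The FAIR-style quantification the abstract alludes to can be illustrated with a minimal Monte Carlo sketch. All parameter values below are hypothetical, and the `simulate_annual_loss` and `value_at_risk` helpers are our own illustrative names, not from the paper: incident frequency is sampled from a Poisson distribution, per-incident magnitude from a lognormal, and value at risk is read off as a quantile of the simulated annual-loss distribution.

```python
import math
import random

def simulate_annual_loss(freq_lambda, loss_mu, loss_sigma, rng):
    """One simulated year: Poisson event count, lognormal loss per event."""
    # Knuth's Poisson sampler (the stdlib random module has no Poisson method).
    limit = math.exp(-freq_lambda)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            break
        k += 1
    return sum(rng.lognormvariate(loss_mu, loss_sigma) for _ in range(k))

def value_at_risk(losses, confidence=0.95):
    """VaR at the given confidence level: the corresponding loss quantile."""
    ordered = sorted(losses)
    idx = int(confidence * len(ordered))
    return ordered[min(idx, len(ordered) - 1)]

rng = random.Random(42)
# Hypothetical scenario: ~2 incidents per year, median incident loss e^10.
annual_losses = [simulate_annual_loss(2.0, 10.0, 1.0, rng)
                 for _ in range(10_000)]
var95 = value_at_risk(annual_losses, 0.95)
```

The same loop structure extends to multiple risk dimensions by summing per-dimension losses inside each simulated year, which is one way to model the interdependencies the paper discusses.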


New Tools are Needed for Tracking Adherence to AI Model Behavioral Use Clauses

McDuff, Daniel, Korjakow, Tim, Klyman, Kevin, Contractor, Danish

arXiv.org Artificial Intelligence

Foundation models have had a transformative impact on AI. A combination of large investments in research and development, growing sources of digital data for training, and architectures that scale with data and compute has led to models with powerful capabilities. Releasing assets is fundamental to scientific advancement and commercial enterprise. However, concerns over negligent or malicious uses of AI have led to the design of mechanisms to limit the risks of the technology. The result has been a proliferation of licenses with behavioral-use clauses and acceptable-use policies that are increasingly being adopted by commonly used families of models (Llama, Gemma, Deepseek) and a myriad of smaller projects. We created and deployed a custom AI license generator to facilitate license creation and have quantitatively and qualitatively analyzed over 300 customized licenses created with this tool. Alongside this, we analyzed 1.7 million model licenses on the HuggingFace model hub. Our results show increasing adoption of these licenses, interest in tools that support their creation, and convergence on common clause configurations. In this paper we take the position that tools for tracking adoption of, and adherence to, these licenses are the natural next step and are urgently needed to ensure they have the desired impact of ensuring responsible use.
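The hub-scale license analysis the abstract describes reduces, at its core, to tallying declared license tags across model metadata. A minimal sketch under stated assumptions: the sample records below are invented, shaped like the `license:*` tags exposed in HuggingFace hub metadata, and `license_distribution` is our own illustrative helper, not from the paper's tooling.

```python
from collections import Counter

# Hypothetical sample of model metadata (the real study covered 1.7M models).
models = [
    {"id": "org/model-a", "tags": ["license:apache-2.0", "text-generation"]},
    {"id": "org/model-b", "tags": ["license:llama3", "text-generation"]},
    {"id": "org/model-c", "tags": ["license:openrail", "vision"]},
    {"id": "org/model-d", "tags": ["license:apache-2.0"]},
    {"id": "org/model-e", "tags": []},  # many models declare no license at all
]

def license_distribution(models):
    """Count declared licenses; models with no license tag count as 'none'."""
    counts = Counter()
    for m in models:
        licenses = [t.removeprefix("license:") for t in m["tags"]
                    if t.startswith("license:")]
        counts[licenses[0] if licenses else "none"] += 1
    return counts

dist = license_distribution(models)
```

Tracking adherence, as opposed to adoption, would require joining such tallies against evidence of downstream use, which is the harder problem the paper argues new tools are needed for.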


The Artificial Intelligence Act: critical overview

Silva, Nuno Sousa e

arXiv.org Artificial Intelligence

This article provides a critical overview of the recently approved Artificial Intelligence Act. It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689. A definition of key concepts follows, and then the material and territorial scope, as well as the timing of application, are analyzed. Although the Regulation does not explicitly set out principles, the main ideas of fairness, accountability, transparency, and equity in AI underlie many of its rules. This is discussed before looking at the ill-defined set of forbidden AI practices (manipulation and exploitation of vulnerabilities, social scoring, biometric identification and classification, and predictive policing). It is highlighted that those rules deal with behaviors rather than AI systems. The qualification and regulation of high-risk AI systems are tackled, alongside the obligation of transparency for certain systems, the regulation of general-purpose models, and the rules on certification, supervision, and sanctions. The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose of promoting responsible innovation within the European Union and beyond its borders.


Lawma: The Power of Specialization for Legal Tasks

Dominguez-Olmedo, Ricardo, Nanda, Vedant, Abebe, Rediet, Bechtold, Stefan, Engel, Christoph, Frankenreiter, Jens, Gummadi, Krishna, Hardt, Moritz, Livermore, Michael

arXiv.org Artificial Intelligence

Annotation and classification of legal text are central components of empirical legal research. Traditionally, these tasks are often delegated to trained research assistants. Motivated by the advances in language modeling, empirical legal scholars are increasingly turning to prompting commercial models, hoping that it will alleviate the significant cost of human annotation. Despite growing use, our understanding of how to best utilize large language models for legal tasks remains limited. We conduct a comprehensive study of 260 legal text classification tasks, nearly all new to the machine learning community. Starting from GPT-4 as a baseline, we show that it has non-trivial but highly varied zero-shot accuracy, often exhibiting performance that may be insufficient for legal work. We then demonstrate that a lightly fine-tuned Llama 3 model vastly outperforms GPT-4 on almost all tasks, typically by double-digit percentage points. We find that larger models respond better to fine-tuning than smaller models. A few tens to hundreds of examples suffice to achieve high classification accuracy. Notably, we can fine-tune a single model on all 260 tasks simultaneously at a small loss in accuracy relative to having a separate model for each task. Our work points to a viable alternative to the predominant practice of prompting commercial models. For concrete legal tasks with some available labeled data, researchers are better off using a fine-tuned open-source model.


AI Safety: Necessary, but insufficient and possibly problematic

P, Deepak

arXiv.org Artificial Intelligence

This article critically examines the recent hype around AI safety. We begin by noting that the AI safety hype is dominated by governments and corporations, and contrast it with other avenues within AI research on advancing social good. We consider what 'AI safety' actually means, and outline the dominant concepts that the digital footprint of AI safety aligns with. We posit that AI safety has a nuanced and uneasy relationship with transparency and other allied notions associated with societal good, indicating that it is an insufficient notion if the goal is societal good in a broad sense. We note that the AI safety debate has already influenced some regulatory efforts in AI, perhaps in not so desirable directions. We also share our concerns about how AI safety may normalize AI that advances structural harm by lending exploitative and harmful AI a veneer of safety.


UK Supreme Court rules AI can't be a patent inventor, 'must be a natural person'

Engadget

AI may or may not take people's jobs in years to come, but in the meantime, there's one thing it cannot obtain: patents. Dr. Stephen Thaler has spent years trying to get patents for two inventions created by his AI "creativity machine" DABUS. Now, the United Kingdom's Supreme Court has rejected his appeal to approve these patents when listing DABUS as the inventor, Reuters reports. The court's rationale stems from a provision in UK patent law that states "an inventor must be a natural person." The ruling stipulated that the appeal was unconcerned with whether this should change in the future. "The judgment establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines," Thaler's lawyers said in a statement.


Quantitative study about the estimated impact of the AI Act

Hauer, Marc P., Krafft, Tobias D, Sesing-Wagenpfeil, Dr. Andreas, Zweig, Prof. Katharina

arXiv.org Artificial Intelligence

With the Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (AI Act), the European Union provided the first regulatory document that applies to the entire complex of AI systems. While some fear that the regulation leaves too much room for interpretation and will thus bring little benefit to society, others expect that it is too restrictive and will block progress and innovation, as well as hinder the economic success of companies within the EU. Without a systematic approach, it is difficult to assess how it will actually impact the AI landscape. In this paper, we suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021. Over several iterations, we compiled the list of AI products and projects in and from Germany that the Lernende Systeme platform lists, and then classified them according to the AI Act together with experts from the fields of computer science and law. Our study shows a need for more concrete formulation, since for some provisions it is often unclear whether they are applicable in a specific case. Apart from that, it turns out that only about 30% of the AI systems considered would be regulated by the AI Act; the rest would be classified as low-risk. However, as the database is not representative, the results only provide a first assessment. The process presented can be applied to any collection, and repeated when regulations are about to change. This allows fears of over- or under-regulation to be investigated before a regulation comes into effect.
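The classification step the study describes can be sketched as a simple rule-based tagger that maps an AI system's application area to an AI Act risk tier. The keyword sets below are illustrative placeholders, not taken from the Regulation's annexes, and `classify` is our own hypothetical helper; the study itself relied on manual expert labeling rather than such rules.

```python
# Illustrative (not normative) application areas per tier.
PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"biometric identification", "credit scoring", "recruitment",
             "critical infrastructure", "law enforcement"}
TRANSPARENCY = {"chatbot", "deepfake", "emotion recognition"}

def classify(application_area: str) -> str:
    """Map a declared application area to a rough AI Act risk tier."""
    area = application_area.lower().strip()
    if area in PROHIBITED:
        return "prohibited"
    if area in HIGH_RISK:
        return "high-risk"
    if area in TRANSPARENCY:
        return "limited-risk (transparency)"
    return "minimal-risk"

# A toy portfolio: most systems fall through to minimal-risk, echoing the
# study's finding that only ~30% of the systems examined would be regulated.
portfolio = ["recruitment", "chatbot", "spam filtering", "social scoring"]
tiers = {app: classify(app) for app in portfolio}
```

Running such a tagger over a product database and inspecting the tier proportions is one way to repeat the study's over-/under-regulation assessment on other collections, as the paper suggests.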


Motion Planning Allows Readers to Understand Humanoid Research

#artificialintelligence

Humanoid robots have bodies that are designed to look like human beings. Most humanoid robots feature a torso, a head, two arms, and two legs, like a natural person, although some model only parts of the human body. One of the most notable differences between humanoid robots and other robots in terms of mobility is the capacity of humanoid robots to execute human-like movements such as bipedal gait. One of the most critical steps toward increasing humanoid robots' overall autonomy and usefulness is precise obstacle-avoidance software and motion planning.
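The obstacle avoidance and motion planning mentioned above can be illustrated in its simplest discrete form: breadth-first search over an occupancy grid, which returns a shortest obstacle-free path if one exists. This is a minimal sketch of the idea, not how real humanoid planners work (those plan in high-dimensional configuration spaces with dynamics constraints); the grid and `plan_path` helper are our own illustrative constructions.

```python
from collections import deque

def plan_path(grid, start, goal):
    """BFS on a 4-connected occupancy grid.
    grid: list of strings, '#' = obstacle, '.' = free.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # also serves as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:  # reconstruct the path by walking predecessors
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable

grid = ["....",
        ".##.",
        "...."]
path = plan_path(grid, (0, 0), (2, 3))
```

Because BFS expands cells in order of distance from the start, the first time the goal is dequeued the reconstructed path is guaranteed shortest, which is the property real planners trade away for speed in larger spaces.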


Thaler v. Vidal: The Federal Circuit Nixes Artificial Intelligence As Inventor - Patent - United States

#artificialintelligence

Patent prosecutors should consider drafting claims to avoid the situation where the AI is the only entity providing an inventive contribution. Artificial intelligence (AI) is making an impact in our everyday life. AI helps us cut grass and vacuum living rooms. It helps us identify images for applications from waste sorting to medical diagnosis. AI is also making an impact in research and development.